Introduction to Pandas

pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with relational or labeled data both easy and intuitive. It is a fundamental high-level building block for doing practical, real-world data analysis in Python.

pandas is well suited for:

  • Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet
  • Ordered and unordered (not necessarily fixed-frequency) time series data
  • Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels
  • Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed into a pandas data structure

Key features:

  • Easy handling of missing data
  • Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional objects
  • Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the data can be aligned automatically
  • Powerful, flexible group by functionality to perform split-apply-combine operations on data sets
  • Intelligent label-based slicing, fancy indexing, and subsetting of large data sets
  • Intuitive merging and joining data sets
  • Flexible reshaping and pivoting of data sets
  • Hierarchical labeling of axes
  • Robust IO tools for loading data from flat files, Excel files, databases, and HDF5
  • Time series functionality: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging, etc.

In [1]:
import pandas as pd
import numpy as np
pd.options.mode.chained_assignment = None  # default='warn'

Pandas Data Structures

Series

A Series is a single vector of data (like a NumPy array) with an index that labels each element in the vector.


In [2]:
counts = pd.Series([632, 1638, 569, 115])
counts


Out[2]:
0     632
1    1638
2     569
3     115
dtype: int64

If an index is not specified, a default sequence of integers is assigned as the index. The values of the Series are stored as a NumPy array, while the index is a pandas Index object.


In [3]:
counts.values


Out[3]:
array([ 632, 1638,  569,  115])

In [4]:
counts.index


Out[4]:
RangeIndex(start=0, stop=4, step=1)

We can assign meaningful labels to the index, if they are available:


In [5]:
bacteria = pd.Series([632, 1638, 569, 115], 
    index=['Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes'])

bacteria


Out[5]:
Firmicutes         632
Proteobacteria    1638
Actinobacteria     569
Bacteroidetes      115
dtype: int64

These labels can be used to refer to the values in the Series.


In [6]:
bacteria['Actinobacteria']


Out[6]:
569

In [7]:
bacteria[[name.endswith('bacteria') for name in bacteria.index]]


Out[7]:
Proteobacteria    1638
Actinobacteria     569
dtype: int64

In [8]:
[name.endswith('bacteria') for name in bacteria.index]


Out[8]:
[False, True, True, False]

Notice that the indexing operation preserved the association between the values and the corresponding indices.

We can still use positional indexing if we wish.


In [9]:
bacteria[0]


Out[9]:
632

We can give both the array of values and the index meaningful labels themselves:


In [10]:
bacteria.name = 'counts'
bacteria.index.name = 'phylum'
bacteria


Out[10]:
phylum
Firmicutes         632
Proteobacteria    1638
Actinobacteria     569
Bacteroidetes      115
Name: counts, dtype: int64

NumPy's math functions and other operations can be applied to Series without losing the data structure.


In [11]:
# natural logarithm
np.log(bacteria)
# log base 10
np.log10(bacteria)


Out[11]:
phylum
Firmicutes        2.800717
Proteobacteria    3.214314
Actinobacteria    2.755112
Bacteroidetes     2.060698
Name: counts, dtype: float64

We can also filter according to the values in the Series:


In [12]:
bacteria[bacteria>1000]


Out[12]:
phylum
Proteobacteria    1638
Name: counts, dtype: int64

A Series can be thought of as an ordered key-value store. In fact, we can create one from a dict:


In [13]:
bacteria_dict = {'Firmicutes': 632, 'Proteobacteria': 1638, 'Actinobacteria': 569,
                 'Bacteroidetes': 115}
pd.Series(bacteria_dict)


Out[13]:
Actinobacteria     569
Bacteroidetes      115
Firmicutes         632
Proteobacteria    1638
dtype: int64

Notice that the Series is created in key-sorted order.

If we pass a custom index to Series, it will select the corresponding values from the dict, and treat indices without corresponding values as missing. Pandas uses the NaN (not a number) type for missing values.


In [14]:
bacteria2 = pd.Series(bacteria_dict, 
                      index=['Cyanobacteria','Firmicutes',
                             'Proteobacteria','Actinobacteria'])
bacteria2


Out[14]:
Cyanobacteria        NaN
Firmicutes         632.0
Proteobacteria    1638.0
Actinobacteria     569.0
dtype: float64

In [15]:
bacteria2.isnull()


Out[15]:
Cyanobacteria      True
Firmicutes        False
Proteobacteria    False
Actinobacteria    False
dtype: bool

Critically, the labels are used to align data when used in operations with other Series objects:


In [16]:
bacteria + bacteria2


Out[16]:
Actinobacteria    1138.0
Bacteroidetes        NaN
Cyanobacteria        NaN
Firmicutes        1264.0
Proteobacteria    3276.0
dtype: float64

Contrast this with NumPy arrays, where arrays of the same length combine values element-wise by position; adding Series combines values with the same label in the resulting Series. Notice also that the missing values were propagated by addition.
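
For comparison, here is a quick sketch of NumPy's purely positional behavior:

a = np.array([1, 2, 3])
b = np.array([10, 20, 30])
a + b   # array([11, 22, 33]): values are paired by position, not by label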

DataFrame

Inevitably, we want to be able to store, view and manipulate data that is multivariate, where for every index there are multiple fields or columns of data, often of varying data type.

A DataFrame is a tabular data structure, encapsulating multiple Series, like columns in a spreadsheet. Data are stored internally as a 2-dimensional object, but the DataFrame allows us to represent and manipulate higher-dimensional data.


In [17]:
data = pd.DataFrame({'value':[632, 1638, 569, 115, 433, 1130, 754, 555],
                     'patient':[1, 1, 1, 1, 2, 2, 2, 2],
                     'phylum':['Firmicutes', 'Proteobacteria', 'Actinobacteria', 
    'Bacteroidetes', 'Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes']})
data


Out[17]:
patient phylum value
0 1 Firmicutes 632
1 1 Proteobacteria 1638
2 1 Actinobacteria 569
3 1 Bacteroidetes 115
4 2 Firmicutes 433
5 2 Proteobacteria 1130
6 2 Actinobacteria 754
7 2 Bacteroidetes 555

Notice the DataFrame is sorted by column name. We can change the order by indexing them in the order we desire:


In [18]:
data[['phylum','value','patient']]


Out[18]:
phylum value patient
0 Firmicutes 632 1
1 Proteobacteria 1638 1
2 Actinobacteria 569 1
3 Bacteroidetes 115 1
4 Firmicutes 433 2
5 Proteobacteria 1130 2
6 Actinobacteria 754 2
7 Bacteroidetes 555 2

A DataFrame has a second index, representing the columns:


In [19]:
data.columns


Out[19]:
Index(['patient', 'phylum', 'value'], dtype='object')

The dtypes attribute reveals the data type for each column in our DataFrame.

  • int64: numeric integer values
  • object: strings (letters and numbers)
  • float64: floating-point values

In [20]:
data.dtypes


Out[20]:
patient     int64
phylum     object
value       int64
dtype: object

If we wish to access columns, we can do so either by dict-like indexing or by attribute:


In [21]:
data['patient']


Out[21]:
0    1
1    1
2    1
3    1
4    2
5    2
6    2
7    2
Name: patient, dtype: int64

In [22]:
data.patient


Out[22]:
0    1
1    1
2    1
3    1
4    2
5    2
6    2
7    2
Name: patient, dtype: int64

In [23]:
type(data.value)


Out[23]:
pandas.core.series.Series

In [24]:
data[['value']]


Out[24]:
value
0 632
1 1638
2 569
3 115
4 433
5 1130
6 754
7 555

Notice this is different from Series, where dict-like indexing retrieved a particular element (row).

If we want access to a row in a DataFrame, we index its loc attribute.


In [25]:
data.loc[3]


Out[25]:
patient                1
phylum     Bacteroidetes
value                115
Name: 3, dtype: object

Exercise 1

Try out these commands to see what they return:

  • data.head()
  • data.tail(3)
  • data.shape

In [26]:
data.head() # returns the first (5 by default) rows of data.


Out[26]:
patient phylum value
0 1 Firmicutes 632
1 1 Proteobacteria 1638
2 1 Actinobacteria 569
3 1 Bacteroidetes 115
4 2 Firmicutes 433

In [27]:
data.tail(3) # returns the 3 last rows of data


Out[27]:
patient phylum value
5 2 Proteobacteria 1130
6 2 Actinobacteria 754
7 2 Bacteroidetes 555

In [28]:
data.shape # returns the dimensions of data (number of rows, number of columns)


Out[28]:
(8, 3)

An alternative way of initializing a DataFrame is with a list of dicts:


In [29]:
data = pd.DataFrame([{'patient': 1, 'phylum': 'Firmicutes', 'value': 632},
                    {'patient': 1, 'phylum': 'Proteobacteria', 'value': 1638},
                    {'patient': 1, 'phylum': 'Actinobacteria', 'value': 569},
                    {'patient': 1, 'phylum': 'Bacteroidetes', 'value': 115},
                    {'patient': 2, 'phylum': 'Firmicutes', 'value': 433},
                    {'patient': 2, 'phylum': 'Proteobacteria', 'value': 1130},
                    {'patient': 2, 'phylum': 'Actinobacteria', 'value': 754},
                    {'patient': 2, 'phylum': 'Bacteroidetes', 'value': 555}])

In [30]:
data


Out[30]:
patient phylum value
0 1 Firmicutes 632
1 1 Proteobacteria 1638
2 1 Actinobacteria 569
3 1 Bacteroidetes 115
4 2 Firmicutes 433
5 2 Proteobacteria 1130
6 2 Actinobacteria 754
7 2 Bacteroidetes 555

It's important to note that the Series returned when a DataFrame is indexed is merely a view on the DataFrame, not a copy of the data itself. So you must be cautious when manipulating these data:


In [31]:
vals = data.value
vals


Out[31]:
0     632
1    1638
2     569
3     115
4     433
5    1130
6     754
7     555
Name: value, dtype: int64

In [32]:
vals[5] = 0
vals


Out[32]:
0     632
1    1638
2     569
3     115
4     433
5       0
6     754
7     555
Name: value, dtype: int64

If we plan on modifying an extracted Series, it's a good idea to make a copy.


In [33]:
vals = data.value.copy()
vals[5] = 1000
vals


Out[33]:
0     632
1    1638
2     569
3     115
4     433
5    1000
6     754
7     555
Name: value, dtype: int64

We can create or modify columns by assignment:


In [34]:
data.value[[3,4,6]] = [14, 21, 5]
data


Out[34]:
patient phylum value
0 1 Firmicutes 632
1 1 Proteobacteria 1638
2 1 Actinobacteria 569
3 1 Bacteroidetes 14
4 2 Firmicutes 21
5 2 Proteobacteria 0
6 2 Actinobacteria 5
7 2 Bacteroidetes 555

In [35]:
data['year'] = 2013
data


Out[35]:
patient phylum value year
0 1 Firmicutes 632 2013
1 1 Proteobacteria 1638 2013
2 1 Actinobacteria 569 2013
3 1 Bacteroidetes 14 2013
4 2 Firmicutes 21 2013
5 2 Proteobacteria 0 2013
6 2 Actinobacteria 5 2013
7 2 Bacteroidetes 555 2013

But note, we cannot use the attribute indexing method to add a new column:


In [36]:
data.treatment = 1
data


Out[36]:
patient phylum value year
0 1 Firmicutes 632 2013
1 1 Proteobacteria 1638 2013
2 1 Actinobacteria 569 2013
3 1 Bacteroidetes 14 2013
4 2 Firmicutes 21 2013
5 2 Proteobacteria 0 2013
6 2 Actinobacteria 5 2013
7 2 Bacteroidetes 555 2013

In [37]:
data.treatment


Out[37]:
1

Exercise 2

From the data table above, create an index to return all rows for which the phylum name ends in "bacteria" and the value is greater than 1000.


First, find the values of 'phylum' ending in 'bacteria':


In [38]:
phyla_with_bacteria = [phylum for phylum in data['phylum'] if phylum.endswith('bacteria')]
phyla_with_bacteria


Out[38]:
['Proteobacteria', 'Actinobacteria', 'Proteobacteria', 'Actinobacteria']

Then filter the rows having one of the 'bacteria' values:


In [39]:
rowswithbacteria = data[data['phylum'].isin(phyla_with_bacteria)]

Then take the values bigger than 1000:


In [40]:
rowswithbacteria[rowswithbacteria.value > 1000]


Out[40]:
patient phylum value year
1 1 Proteobacteria 1638 2013

Note that it would probably be faster to filter on the value > 1000 condition first, since it eliminates more rows.
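
For reference, a sketch of a single vectorized expression that combines both conditions (same columns as above):

data[data['phylum'].str.endswith('bacteria') & (data.value > 1000)]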

Specifying a Series as a new column causes its values to be added according to the DataFrame's index:


In [41]:
treatment = pd.Series([0]*4 + [1]*2)
treatment


Out[41]:
0    0
1    0
2    0
3    0
4    1
5    1
dtype: int64

In [42]:
data['treatment'] = treatment
data


Out[42]:
patient phylum value year treatment
0 1 Firmicutes 632 2013 0.0
1 1 Proteobacteria 1638 2013 0.0
2 1 Actinobacteria 569 2013 0.0
3 1 Bacteroidetes 14 2013 0.0
4 2 Firmicutes 21 2013 1.0
5 2 Proteobacteria 0 2013 1.0
6 2 Actinobacteria 5 2013 NaN
7 2 Bacteroidetes 555 2013 NaN

Other Python data structures (ones without an index) need to be the same length as the DataFrame:


In [43]:
month = ['Jan', 'Feb', 'Mar', 'Apr']
# data['month'] = month # throws error (done on purpose)
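
Wrapping the assignment in try/except shows the failure without halting the notebook (the exact message may vary by pandas version):

try:
    data['month'] = month   # 4 values cannot fill 8 rows
except ValueError as err:
    print(err)   # e.g. "Length of values does not match length of index"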

In [44]:
data['month'] = ['Jan']*len(data)
data


Out[44]:
patient phylum value year treatment month
0 1 Firmicutes 632 2013 0.0 Jan
1 1 Proteobacteria 1638 2013 0.0 Jan
2 1 Actinobacteria 569 2013 0.0 Jan
3 1 Bacteroidetes 14 2013 0.0 Jan
4 2 Firmicutes 21 2013 1.0 Jan
5 2 Proteobacteria 0 2013 1.0 Jan
6 2 Actinobacteria 5 2013 NaN Jan
7 2 Bacteroidetes 555 2013 NaN Jan

We can use the drop method to remove rows or columns, which by default drops rows. We can be explicit by using the axis argument:


In [45]:
data_nomonth = data.drop('month', axis=1)
data_nomonth


Out[45]:
patient phylum value year treatment
0 1 Firmicutes 632 2013 0.0
1 1 Proteobacteria 1638 2013 0.0
2 1 Actinobacteria 569 2013 0.0
3 1 Bacteroidetes 14 2013 0.0
4 2 Firmicutes 21 2013 1.0
5 2 Proteobacteria 0 2013 1.0
6 2 Actinobacteria 5 2013 NaN
7 2 Bacteroidetes 555 2013 NaN

We can extract the underlying data as a simple ndarray by accessing the values attribute:


In [46]:
data.values


Out[46]:
array([[1, 'Firmicutes', 632, 2013, 0.0, 'Jan'],
       [1, 'Proteobacteria', 1638, 2013, 0.0, 'Jan'],
       [1, 'Actinobacteria', 569, 2013, 0.0, 'Jan'],
       [1, 'Bacteroidetes', 14, 2013, 0.0, 'Jan'],
       [2, 'Firmicutes', 21, 2013, 1.0, 'Jan'],
       [2, 'Proteobacteria', 0, 2013, 1.0, 'Jan'],
       [2, 'Actinobacteria', 5, 2013, nan, 'Jan'],
       [2, 'Bacteroidetes', 555, 2013, nan, 'Jan']], dtype=object)

Notice that because of the mix of string and integer (and NaN) values, the dtype of the array is object. The dtype will automatically be chosen to be as general as needed to accommodate all the columns.


In [47]:
df = pd.DataFrame({'foo': [1,2,3], 'bar':[0.4, -1.0, 4.5]})
df.values


Out[47]:
array([[ 0.4,  1. ],
       [-1. ,  2. ],
       [ 4.5,  3. ]])

Pandas uses a custom data structure to represent the indices of Series and DataFrames.


In [48]:
data.index


Out[48]:
RangeIndex(start=0, stop=8, step=1)

Index objects are immutable:


In [49]:
# data.index[0] = 15 # throws error

This is so that Index objects can be shared between data structures without fear that they will be changed.
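
A small sketch makes the immutability concrete (the exact message may vary by pandas version):

try:
    data.index[0] = 15
except TypeError as err:
    print(err)   # e.g. "Index does not support mutable operations"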


In [50]:
bacteria2.index = bacteria.index

In [51]:
bacteria2


Out[51]:
phylum
Firmicutes           NaN
Proteobacteria     632.0
Actinobacteria    1638.0
Bacteroidetes      569.0
dtype: float64

Importing data

A key, but often under-appreciated, step in data analysis is importing the data that we wish to analyze. Though it is easy to load basic data structures into Python using built-in tools or those provided by packages like NumPy, it is non-trivial to import structured data well, and to easily convert this input into a robust data structure:

genes = np.loadtxt("genes.csv", delimiter=",", dtype=[('gene', '|S10'), ('value', '<f4')])

Pandas provides a convenient set of functions for importing tabular data in a number of formats directly into a DataFrame object. These functions include a slew of options to perform type inference, indexing, parsing, iterating and cleaning automatically as data are imported.

Let's start with some more bacteria data, stored in csv format.


In [52]:
!cat Data/microbiome.csv


Taxon,Patient,Group,Tissue,Stool
Firmicutes,1,0,136,4182
Firmicutes,2,1,1174,703
Firmicutes,3,0,408,3946
Firmicutes,4,1,831,8605
Firmicutes,5,0,693,50
Firmicutes,6,1,718,717
Firmicutes,7,0,173,33
Firmicutes,8,1,228,80
Firmicutes,9,0,162,3196
Firmicutes,10,1,372,32
Firmicutes,11,0,4255,4361
Firmicutes,12,1,107,1667
Firmicutes,13,0,96,223
Firmicutes,14,1,281,2377
Proteobacteria,1,0,2469,1821
Proteobacteria,2,1,839,661
Proteobacteria,3,0,4414,18
Proteobacteria,4,1,12044,83
Proteobacteria,5,0,2310,12
Proteobacteria,6,1,3053,547
Proteobacteria,7,0,395,2174
Proteobacteria,8,1,2651,767
Proteobacteria,9,0,1195,76
Proteobacteria,10,1,6857,795
Proteobacteria,11,0,483,666
Proteobacteria,12,1,2950,3994
Proteobacteria,13,0,1541,816
Proteobacteria,14,1,1307,53
Actinobacteria,1,0,1590,4
Actinobacteria,2,1,25,2
Actinobacteria,3,0,259,300
Actinobacteria,4,1,568,7
Actinobacteria,5,0,1102,9
Actinobacteria,6,1,678,377
Actinobacteria,7,0,260,58
Actinobacteria,8,1,424,233
Actinobacteria,9,0,548,21
Actinobacteria,10,1,201,83
Actinobacteria,11,0,42,75
Actinobacteria,12,1,109,59
Actinobacteria,13,0,51,183
Actinobacteria,14,1,310,204
Bacteroidetes,1,0,67,0
Bacteroidetes,2,1,0,0
Bacteroidetes,3,0,85,5
Bacteroidetes,4,1,143,7
Bacteroidetes,5,0,678,2
Bacteroidetes,6,1,4829,209
Bacteroidetes,7,0,74,651
Bacteroidetes,8,1,169,254
Bacteroidetes,9,0,106,10
Bacteroidetes,10,1,73,381
Bacteroidetes,11,0,30,359
Bacteroidetes,12,1,51,51
Bacteroidetes,13,0,2473,2314
Bacteroidetes,14,1,102,33
Other,1,0,195,18
Other,2,1,42,2
Other,3,0,316,43
Other,4,1,202,40
Other,5,0,116,0
Other,6,1,527,12
Other,7,0,357,11
Other,8,1,106,11
Other,9,0,67,14
Other,10,1,203,6
Other,11,0,392,6
Other,12,1,28,25
Other,13,0,12,22
Other,14,1,305,32

This table can be read into a DataFrame using read_csv:


In [53]:
mb = pd.read_csv("Data/microbiome.csv")
mb


Out[53]:
Taxon Patient Group Tissue Stool
0 Firmicutes 1 0 136 4182
1 Firmicutes 2 1 1174 703
2 Firmicutes 3 0 408 3946
3 Firmicutes 4 1 831 8605
4 Firmicutes 5 0 693 50
5 Firmicutes 6 1 718 717
6 Firmicutes 7 0 173 33
7 Firmicutes 8 1 228 80
8 Firmicutes 9 0 162 3196
9 Firmicutes 10 1 372 32
10 Firmicutes 11 0 4255 4361
11 Firmicutes 12 1 107 1667
12 Firmicutes 13 0 96 223
13 Firmicutes 14 1 281 2377
14 Proteobacteria 1 0 2469 1821
15 Proteobacteria 2 1 839 661
16 Proteobacteria 3 0 4414 18
17 Proteobacteria 4 1 12044 83
18 Proteobacteria 5 0 2310 12
19 Proteobacteria 6 1 3053 547
20 Proteobacteria 7 0 395 2174
21 Proteobacteria 8 1 2651 767
22 Proteobacteria 9 0 1195 76
23 Proteobacteria 10 1 6857 795
24 Proteobacteria 11 0 483 666
25 Proteobacteria 12 1 2950 3994
26 Proteobacteria 13 0 1541 816
27 Proteobacteria 14 1 1307 53
28 Actinobacteria 1 0 1590 4
29 Actinobacteria 2 1 25 2
... ... ... ... ... ...
40 Actinobacteria 13 0 51 183
41 Actinobacteria 14 1 310 204
42 Bacteroidetes 1 0 67 0
43 Bacteroidetes 2 1 0 0
44 Bacteroidetes 3 0 85 5
45 Bacteroidetes 4 1 143 7
46 Bacteroidetes 5 0 678 2
47 Bacteroidetes 6 1 4829 209
48 Bacteroidetes 7 0 74 651
49 Bacteroidetes 8 1 169 254
50 Bacteroidetes 9 0 106 10
51 Bacteroidetes 10 1 73 381
52 Bacteroidetes 11 0 30 359
53 Bacteroidetes 12 1 51 51
54 Bacteroidetes 13 0 2473 2314
55 Bacteroidetes 14 1 102 33
56 Other 1 0 195 18
57 Other 2 1 42 2
58 Other 3 0 316 43
59 Other 4 1 202 40
60 Other 5 0 116 0
61 Other 6 1 527 12
62 Other 7 0 357 11
63 Other 8 1 106 11
64 Other 9 0 67 14
65 Other 10 1 203 6
66 Other 11 0 392 6
67 Other 12 1 28 25
68 Other 13 0 12 22
69 Other 14 1 305 32

70 rows × 5 columns

Notice that read_csv automatically considered the first row in the file to be a header row.

We can override default behavior by customizing some of the arguments, like header, names or index_col.


In [54]:
pd.read_csv("Data/microbiome.csv", header=None).head()


Out[54]:
0 1 2 3 4
0 Taxon Patient Group Tissue Stool
1 Firmicutes 1 0 136 4182
2 Firmicutes 2 1 1174 703
3 Firmicutes 3 0 408 3946
4 Firmicutes 4 1 831 8605
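
For instance, a quick sketch of the names argument, supplying our own column labels and skipping the file's existing header row:

pd.read_csv("Data/microbiome.csv", skiprows=1,
            names=['taxon', 'patient', 'group', 'tissue', 'stool']).head()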

read_csv is just a convenience function for read_table, since csv is such a common format:


In [55]:
mb = pd.read_table("Data/microbiome.csv", sep=',')

The sep argument can be customized as needed to accommodate arbitrary separators. For example, we can use a regular expression to define a variable amount of whitespace, which is unfortunately very common in some data formats:

sep='\s+'
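
A minimal sketch, assuming a hypothetical whitespace-delimited file:

pd.read_table("whitespace_data.txt", sep='\s+')   # splits on any run of whitespace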

For a more useful index, we can specify the first two columns, which together provide a unique index to the data.


In [56]:
mb = pd.read_csv("Data/microbiome.csv", index_col=['Patient','Taxon'])
mb.head()


Out[56]:
Group Tissue Stool
Patient Taxon
1 Firmicutes 0 136 4182
2 Firmicutes 1 1174 703
3 Firmicutes 0 408 3946
4 Firmicutes 1 831 8605
5 Firmicutes 0 693 50

This is called a hierarchical index, which we will revisit later in the section.

If we have sections of data that we do not wish to import (for example, known bad data), we can use the skiprows argument:


In [57]:
pd.read_csv("Data/microbiome.csv", skiprows=[3,4,6]).head()


Out[57]:
Taxon Patient Group Tissue Stool
0 Firmicutes 1 0 136 4182
1 Firmicutes 2 1 1174 703
2 Firmicutes 5 0 693 50
3 Firmicutes 7 0 173 33
4 Firmicutes 8 1 228 80

If we only want to import a small number of rows from, say, a very large data file, we can use nrows:


In [58]:
pd.read_csv("Data/microbiome.csv", nrows=4)


Out[58]:
Taxon Patient Group Tissue Stool
0 Firmicutes 1 0 136 4182
1 Firmicutes 2 1 1174 703
2 Firmicutes 3 0 408 3946
3 Firmicutes 4 1 831 8605

Alternatively, if we want to process our data in reasonable chunks, the chunksize argument will return an iterable object that can be employed in a data processing loop. For example, our microbiome data are organized by bacterial phylum, with 14 patients represented in each:


In [59]:
pd.read_csv("Data/microbiome.csv", chunksize=14)


Out[59]:
<pandas.io.parsers.TextFileReader at 0x113784550>

In [60]:
data_chunks = pd.read_csv("Data/microbiome.csv", chunksize=14)

mean_tissue = pd.Series({chunk.Taxon[0]: chunk.Tissue.mean() for chunk in data_chunks})
    
mean_tissue


Out[60]:
Actinobacteria     440.500000
Bacteroidetes      634.285714
Firmicutes         688.142857
Other              204.857143
Proteobacteria    3036.285714
dtype: float64

Most real-world data is incomplete, with values missing due to incomplete observation, data entry or transcription error, or other reasons. Pandas will automatically recognize and parse common missing data indicators, including NA and NULL.


In [61]:
!cat Data/microbiome_missing.csv


Taxon,Patient,Tissue,Stool
Firmicutes,1,632,305
Firmicutes,2,136,4182
Firmicutes,3,,703
Firmicutes,4,408,3946
Firmicutes,5,831,8605
Firmicutes,6,693,50
Firmicutes,7,718,717
Firmicutes,8,173,33
Firmicutes,9,228,NA
Firmicutes,10,162,3196
Firmicutes,11,372,-99999
Firmicutes,12,4255,4361
Firmicutes,13,107,1667
Firmicutes,14,?,223
Firmicutes,15,281,2377
Proteobacteria,1,1638,3886
Proteobacteria,2,2469,1821
Proteobacteria,3,839,661
Proteobacteria,4,4414,18
Proteobacteria,5,12044,83
Proteobacteria,6,2310,12
Proteobacteria,7,3053,547
Proteobacteria,8,395,2174
Proteobacteria,9,2651,767
Proteobacteria,10,1195,76
Proteobacteria,11,6857,795
Proteobacteria,12,483,666
Proteobacteria,13,2950,3994
Proteobacteria,14,1541,816
Proteobacteria,15,1307,53
Actinobacteria,1,569,648
Actinobacteria,2,1590,4
Actinobacteria,3,25,2
Actinobacteria,4,259,300
Actinobacteria,5,568,7
Actinobacteria,6,1102,9
Actinobacteria,7,678,377
Actinobacteria,8,260,58
Actinobacteria,9,424,233
Actinobacteria,10,548,21
Actinobacteria,11,201,83
Actinobacteria,12,42,75
Actinobacteria,13,109,59
Actinobacteria,14,51,183
Actinobacteria,15,310,204
Bacteroidetes,1,115,380
Bacteroidetes,2,67,0
Bacteroidetes,3,0,0
Bacteroidetes,4,85,5
Bacteroidetes,5,143,7
Bacteroidetes,6,678,2
Bacteroidetes,7,4829,209
Bacteroidetes,8,74,651
Bacteroidetes,9,169,254
Bacteroidetes,10,106,10
Bacteroidetes,11,73,381
Bacteroidetes,12,30,359
Bacteroidetes,13,51,51
Bacteroidetes,14,2473,2314
Bacteroidetes,15,102,33
Other,1,114,277
Other,2,195,18
Other,3,42,2
Other,4,316,43
Other,5,202,40
Other,6,116,0
Other,7,527,12
Other,8,357,11
Other,9,106,11
Other,10,67,14
Other,11,203,6
Other,12,392,6
Other,13,28,25
Other,14,12,22
Other,15,305,32

In [62]:
pd.read_csv("Data/microbiome_missing.csv").head(20)


Out[62]:
Taxon Patient Tissue Stool
0 Firmicutes 1 632 305.0
1 Firmicutes 2 136 4182.0
2 Firmicutes 3 NaN 703.0
3 Firmicutes 4 408 3946.0
4 Firmicutes 5 831 8605.0
5 Firmicutes 6 693 50.0
6 Firmicutes 7 718 717.0
7 Firmicutes 8 173 33.0
8 Firmicutes 9 228 NaN
9 Firmicutes 10 162 3196.0
10 Firmicutes 11 372 -99999.0
11 Firmicutes 12 4255 4361.0
12 Firmicutes 13 107 1667.0
13 Firmicutes 14 ? 223.0
14 Firmicutes 15 281 2377.0
15 Proteobacteria 1 1638 3886.0
16 Proteobacteria 2 2469 1821.0
17 Proteobacteria 3 839 661.0
18 Proteobacteria 4 4414 18.0
19 Proteobacteria 5 12044 83.0

Above, Pandas recognized NA and an empty field as missing data.


In [63]:
pd.isnull(pd.read_csv("Data/microbiome_missing.csv")).head(20)


Out[63]:
Taxon Patient Tissue Stool
0 False False False False
1 False False False False
2 False False True False
3 False False False False
4 False False False False
5 False False False False
6 False False False False
7 False False False False
8 False False False True
9 False False False False
10 False False False False
11 False False False False
12 False False False False
13 False False False False
14 False False False False
15 False False False False
16 False False False False
17 False False False False
18 False False False False
19 False False False False

Unfortunately, there will sometimes be inconsistency with the conventions for missing data. In this example, there is a question mark "?" and a large negative number where there should have been a positive integer. We can specify additional symbols with the na_values argument:


In [64]:
pd.read_csv("Data/microbiome_missing.csv", na_values=['?', -99999]).head(20)


Out[64]:
Taxon Patient Tissue Stool
0 Firmicutes 1 632.0 305.0
1 Firmicutes 2 136.0 4182.0
2 Firmicutes 3 NaN 703.0
3 Firmicutes 4 408.0 3946.0
4 Firmicutes 5 831.0 8605.0
5 Firmicutes 6 693.0 50.0
6 Firmicutes 7 718.0 717.0
7 Firmicutes 8 173.0 33.0
8 Firmicutes 9 228.0 NaN
9 Firmicutes 10 162.0 3196.0
10 Firmicutes 11 372.0 NaN
11 Firmicutes 12 4255.0 4361.0
12 Firmicutes 13 107.0 1667.0
13 Firmicutes 14 NaN 223.0
14 Firmicutes 15 281.0 2377.0
15 Proteobacteria 1 1638.0 3886.0
16 Proteobacteria 2 2469.0 1821.0
17 Proteobacteria 3 839.0 661.0
18 Proteobacteria 4 4414.0 18.0
19 Proteobacteria 5 12044.0 83.0

These can be specified on a column-wise basis using an appropriate dict as the argument for na_values.
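
For example, a sketch of a column-wise specification, treating '?' as missing only in Tissue and -99999 only in Stool:

pd.read_csv("Data/microbiome_missing.csv",
            na_values={'Tissue': ['?'], 'Stool': [-99999]}).head(20)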

Microsoft Excel

Since so much financial and scientific data ends up in Excel spreadsheets (regrettably), Pandas' ability to directly import Excel spreadsheets is valuable. This support is contingent on having one or two dependencies (depending on what version of Excel file is being imported) installed: xlrd and openpyxl (these may be installed with either pip or easy_install).

The read_excel convenience function in pandas imports a specific sheet from an Excel file:


In [65]:
mb = pd.read_excel('Data/microbiome/MID2.xls', sheetname='Sheet 1', header=None)
mb.head()


Out[65]:
0 1
0 Archaea "Crenarchaeota" Thermoprotei Acidiloba... 2
1 Archaea "Crenarchaeota" Thermoprotei Acidiloba... 14
2 Archaea "Crenarchaeota" Thermoprotei Desulfuro... 23
3 Archaea "Crenarchaeota" Thermoprotei Desulfuro... 1
4 Archaea "Crenarchaeota" Thermoprotei Desulfuro... 2

There are several other data formats that can be imported into Python and converted into DataFrames, with the help of built-in or third-party libraries. These include JSON, XML, HDF5, relational and non-relational databases, and various web APIs. These are beyond the scope of this tutorial, but are covered in Python for Data Analysis.

Pandas Fundamentals

This section introduces the new user to the key functionality of Pandas that is required to use the software effectively.

For some variety, we will leave our digestive tract bacteria behind and employ some baseball data.


In [66]:
baseball = pd.read_csv("Data/baseball.csv", index_col='id')
baseball.head()


Out[66]:
player year stint team lg g ab r h X2b ... rbi sb cs bb so ibb hbp sh sf gidp
id
88641 womacto01 2006 2 CHN NL 19 50 6 14 1 ... 2.0 1.0 1.0 4 4.0 0.0 0.0 3.0 0.0 0.0
88643 schilcu01 2006 1 BOS AL 31 2 0 1 0 ... 0.0 0.0 0.0 0 1.0 0.0 0.0 0.0 0.0 0.0
88645 myersmi01 2006 1 NYA AL 62 0 0 0 0 ... 0.0 0.0 0.0 0 0.0 0.0 0.0 0.0 0.0 0.0
88649 helliri01 2006 1 MIL NL 20 3 0 0 0 ... 0.0 0.0 0.0 0 2.0 0.0 0.0 0.0 0.0 0.0
88650 johnsra05 2006 1 NYA AL 33 6 0 1 0 ... 0.0 0.0 0.0 0 4.0 0.0 0.0 0.0 0.0 0.0

5 rows × 22 columns

Notice that we specified the id column as the index, since it appears to be a unique identifier. We could try to create a unique index ourselves by combining player and year:


In [67]:
player_id = baseball.player + baseball.year.astype(str)
baseball_newind = baseball.copy()
baseball_newind.index = player_id
baseball_newind.head()


Out[67]:
player year stint team lg g ab r h X2b ... rbi sb cs bb so ibb hbp sh sf gidp
womacto012006 womacto01 2006 2 CHN NL 19 50 6 14 1 ... 2.0 1.0 1.0 4 4.0 0.0 0.0 3.0 0.0 0.0
schilcu012006 schilcu01 2006 1 BOS AL 31 2 0 1 0 ... 0.0 0.0 0.0 0 1.0 0.0 0.0 0.0 0.0 0.0
myersmi012006 myersmi01 2006 1 NYA AL 62 0 0 0 0 ... 0.0 0.0 0.0 0 0.0 0.0 0.0 0.0 0.0 0.0
helliri012006 helliri01 2006 1 MIL NL 20 3 0 0 0 ... 0.0 0.0 0.0 0 2.0 0.0 0.0 0.0 0.0 0.0
johnsra052006 johnsra05 2006 1 NYA AL 33 6 0 1 0 ... 0.0 0.0 0.0 0 4.0 0.0 0.0 0.0 0.0 0.0

5 rows × 22 columns

This looks okay, but let's check:


In [68]:
baseball_newind.index.is_unique


Out[68]:
False

So, indices need not be unique. Our choice is not unique because some players change teams within years.


In [69]:
pd.Series(baseball_newind.index).value_counts()


Out[69]:
benitar012007    2
cirilje012007    2
loftoke012007    2
wellsda012007    2
gomezch022007    2
coninje012007    2
hernaro012007    2
trachst012007    2
sweenma012007    2
wickmbo012007    2
claytro012007    2
francju012007    2
stinnke012007    1
smoltjo012007    1
biggicr012007    1
greensh012007    1
schilcu012006    1
mesajo012007     1
myersmi012006    1
perezne012007    1
williwo022007    1
martipe022007    1
sandere022007    1
zaungr012007     1
seleaa012006     1
floydcl012007    1
ramirma022007    1
kleskry012007    1
gonzalu012006    1
loaizes012007    1
                ..
stairma012007    1
edmonji012007    1
oliveda022007    1
sprinru012007    1
cormirh012007    1
parkch012007     1
wakefti012007    1
myersmi012007    1
weathda012007    1
rodriiv012007    1
helliri012006    1
johnsra052007    1
vizquom012007    1
tavarju012007    1
griffke022007    1
delgaca012007    1
thomafr042007    1
valenjo032007    1
aloumo012007     1
witasja012007    1
easleda012007    1
timlimi012007    1
guarded012007    1
thomeji012007    1
rogerke012007    1
finlest012006    1
johnsra052006    1
graffto012007    1
clemero022007    1
sosasa012007     1
dtype: int64

The most important consequence of a non-unique index is that indexing by label will return multiple values for some labels:


In [ ]:
baseball_newind.loc['wickmbo012007']

We will learn more about indexing below.

We can create a truly unique index by combining player, team and year:


In [ ]:
player_unique = baseball.player + baseball.team + baseball.year.astype(str)
baseball_newind = baseball.copy()
baseball_newind.index = player_unique
baseball_newind.head()

In [ ]:
baseball_newind.index.is_unique

We can create meaningful indices more easily using a hierarchical index; for now, we will stick with the numeric id field as our index.

Manipulating indices

Reindexing allows users to manipulate the data labels in a DataFrame. It forces a DataFrame to conform to the new index, optionally filling in missing data if requested.

A simple use of reindex is to alter the order of the rows:


In [ ]:
baseball.reindex(baseball.index[::-1]).head()

Notice that the id index is not sequential. Say we wanted to populate the table with every id value. We could specify an index that is a sequence from the first to the last id number in the database, and Pandas would fill in the missing data with NaN values:


In [ ]:
id_range = range(baseball.index.values.min(), baseball.index.values.max())
baseball.reindex(id_range).head()

Missing values can be filled as desired, either with selected values, or by rule:


In [ ]:
baseball.reindex(id_range, method='ffill', columns=['player','year']).head()

In [ ]:
baseball.reindex(id_range, fill_value='charliebrown', columns=['player']).head()

Keep in mind that reindex does not work if we pass a non-unique index series.
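
A small sketch with a toy Series shows why (the exact message may vary by pandas version):

s = pd.Series([1, 2, 3], index=['a', 'a', 'b'])   # duplicate label 'a'
try:
    s.reindex(['a', 'b'])
except ValueError as err:
    print(err)   # e.g. "cannot reindex from a duplicate axis"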

We can remove rows or columns via the drop method:


In [ ]:
baseball.shape

In [ ]:
baseball.drop([89525, 89526])

In [ ]:
baseball.drop(['ibb','hbp'], axis=1)

Indexing and Selection

Indexing works analogously to indexing in NumPy arrays, except we can use the labels in the Index object to extract values in addition to arrays of integers.


In [ ]:
# Sample Series object
hits = baseball_newind.h
hits

In [ ]:
# Numpy-style indexing
hits[:3]

In [ ]:
# Indexing by label
hits[['womacto01CHN2006','schilcu01BOS2006']]

We can also slice with data labels, since they have an intrinsic order within the Index:


In [ ]:
hits['womacto01CHN2006':'gonzalu01ARI2006']

In [ ]:
hits['womacto01CHN2006':'gonzalu01ARI2006'] = 5
hits

In a DataFrame we can slice along either or both axes:


In [ ]:
baseball_newind[['h','ab']]

In [ ]:
baseball_newind[baseball_newind.ab>500]

For a more concise (and readable) syntax, we can use the new query method to perform selection on a DataFrame. Instead of having to type the fully-specified column, we can simply pass a string that describes what to select. The query above is then simply:


In [ ]:
baseball_newind.query('ab > 500')

The DataFrame.index and DataFrame.columns are placed in the query namespace by default. If you want to refer to a variable in the current namespace, you can prefix the variable with @:


In [ ]:
min_ab = 450

In [ ]:
baseball_newind.query('ab > @min_ab')

The indexing field loc allows us to select subsets of rows and columns in an intuitive way:


In [ ]:
baseball_newind.loc['gonzalu01ARI2006', ['h','X2b', 'X3b', 'hr']]

In [ ]:
baseball_newind.loc[:'myersmi01NYA2006', 'hr']

In addition to using loc to select rows and columns by label, pandas also allows indexing by position using the iloc attribute.

So, we can query rows and columns by absolute position, rather than by name:


In [ ]:
baseball_newind.iloc[:5, 5:8]

Exercise 3

You can use the isin method to query a DataFrame based upon a list of values as follows:

data['phylum'].isin(['Firmicutes', 'Bacteroidetes'])

Use isin to find all players that played for the Los Angeles Dodgers (LAN) or the San Francisco Giants (SFN). How many records contain these values?


In [ ]:
baseball[baseball['team'].isin(['LAN', 'SFN'])]

15 records contain these values.

Operations

DataFrame and Series objects allow for several operations to take place either on a single object, or between two or more objects.

For example, we can perform arithmetic on the elements of two objects, such as combining baseball statistics across years. First, let's (artificially) construct two Series, consisting of home runs hit in years 2006 and 2007, respectively:


In [ ]:
hr2006 = baseball.loc[baseball.year==2006, 'hr']
hr2006.index = baseball.player[baseball.year==2006]

hr2007 = baseball.loc[baseball.year==2007, 'hr']
hr2007.index = baseball.player[baseball.year==2007]

In [ ]:
hr2007

Now, let's add them together, in hopes of getting 2-year home run totals:


In [ ]:
hr_total = hr2006 + hr2007
hr_total

Pandas' data alignment places NaN values for labels that do not overlap in the two Series. In fact, there are only 6 players that occur in both years.


In [ ]:
hr_total[hr_total.notnull()]

While we do want the operation to honor the data labels in this way, we probably do not want the missing values to be filled with NaN. We can use the add method to calculate player home run totals by using the fill_value argument to insert a zero for home runs where labels do not overlap:


In [ ]:
hr2007.add(hr2006, fill_value=0)

Operations can also be broadcast between rows or columns.

For example, if we subtract the maximum number of home runs hit from the hr column, we get how many fewer than the maximum were hit by each player:


In [ ]:
baseball.hr - baseball.hr.max()

Or, looking at things row-wise, we can see how a particular player compares with the rest of the group with respect to important statistics:


In [ ]:
baseball.loc[89521, "player"]

In [ ]:
stats = baseball[['h','X2b', 'X3b', 'hr']]
diff = stats - stats.loc[88641]
diff[:10]

We can also apply functions to each column or row of a DataFrame:


In [ ]:
stats.apply(np.median)

In [ ]:
def range_calc(x):
    return x.max() - x.min()

In [ ]:
stat_range = lambda x: x.max() - x.min()
stats.apply(stat_range)

Let's use apply to calculate a meaningful baseball statistic, slugging percentage:

$$SLG = \frac{1B + (2 \times 2B) + (3 \times 3B) + (4 \times HR)}{AB}$$

And just for fun, we will format the resulting estimate.


In [ ]:
def slugging(x): 
    bases = x['h']-x['X2b']-x['X3b']-x['hr'] + 2*x['X2b'] + 3*x['X3b'] + 4*x['hr']
    ab = x['ab']+1e-6
    
    return bases/ab

baseball.apply(slugging, axis=1).round(3)

Sorting and Ranking

Pandas objects include methods for re-ordering data.


In [ ]:
baseball_newind.sort_index().head()

In [ ]:
baseball_newind.sort_index(ascending=False).head()

Try sorting the columns instead of the rows, in ascending order:


In [ ]:
baseball_newind.sort_index(axis=1).head()

We can also use sort_values to sort a Series by value, rather than by label.


In [ ]:
baseball.hr.sort_values(ascending=False)

For a DataFrame, we can sort according to the values of one or more columns using the by argument of sort_values:


In [ ]:
baseball[['player','sb','cs']].sort_values(ascending=[False,True], 
                                           by=['sb', 'cs']).head(10)

Ranking does not re-arrange data, but instead returns an index that ranks each value relative to others in the Series.


In [ ]:
baseball.hr.rank()

Ties are assigned the mean value of the tied ranks, which may result in decimal values.


In [ ]:
pd.Series([100,100]).rank()

Alternatively, you can break ties via one of several methods, such as by the order in which they occur in the dataset:


In [ ]:
baseball.hr.rank(method='first')

Calling the DataFrame's rank method results in the ranks of all columns:


In [ ]:
baseball.rank(ascending=False).head()

In [ ]:
baseball[['r','h','hr']].rank(ascending=False).head()

Exercise 4

Calculate on base percentage for each player, and return the ordered series of estimates.

$$OBP = \frac{H + BB + HBP}{AB + BB + HBP + SF}$$

Define the function and apply it:


In [ ]:
def on_base_perc(pl):
    numerator = pl['h'] + pl['bb'] + pl['hbp']   # H + BB + HBP
    denom = pl['ab'] + pl['bb'] + pl['hbp'] + pl['sf']
    if denom == 0:   # we cannot divide by zero, so return NaN instead
        return np.nan
    return numerator / denom
baseball.apply(on_base_perc, axis=1).round(3)

And again, but ordered:


In [ ]:
baseball.apply(on_base_perc, axis=1).round(3).sort_values(ascending=False)

Hierarchical indexing

In the baseball example, I was forced to combine 3 fields to obtain a unique index that was not simply an integer value. A more elegant way to have done this would be to create a hierarchical index from the three fields.


In [ ]:
baseball_h = baseball.set_index(['year', 'team', 'player'])
baseball_h.head(10)

This index is a MultiIndex object that consists of a sequence of tuples, the elements of which are combinations of values from the three columns used to create the index. Where there are multiple repeated values, Pandas does not print the repeats, making it easy to identify groups of values.


In [ ]:
baseball_h.index[:10]

In [ ]:
baseball_h.index.is_unique

Try using this hierarchical index to retrieve Julio Franco (francju01), who played for the Atlanta Braves (ATL) in 2007:


In [ ]:
baseball_h.loc[(2007, 'ATL', 'francju01')]

Recall earlier we imported some microbiome data using two index columns. This created a 2-level hierarchical index:


In [ ]:
mb = pd.read_csv("Data/microbiome.csv", index_col=['Taxon','Patient'])

In [ ]:
mb.head(10)

With a hierarchical index, we can select subsets of the data based on a partial index:


In [ ]:
mb.loc['Proteobacteria']

Hierarchical indices can be created on either or both axes. Here is a trivial example:


In [ ]:
frame = pd.DataFrame(np.arange(12).reshape(( 4, 3)), 
                  index =[['a', 'a', 'b', 'b'], [1, 2, 1, 2]], 
                  columns =[['Ohio', 'Ohio', 'Colorado'], ['Green', 'Red', 'Green']])

frame

If you want to get fancy, both the row and column indices themselves can be given names:


In [ ]:
frame.index.names = ['key1', 'key2']
frame.columns.names = ['state', 'color']
frame

With this, we can do all sorts of custom indexing:


In [ ]:
frame.loc['a', 'Ohio']

Try retrieving the value corresponding to b2 in Colorado:


Fetch b2 first, and then Colorado:


In [ ]:
frame.loc['b', 2]['Colorado']

Additionally, the order of the set of indices in a hierarchical MultiIndex can be changed by swapping them pairwise:


In [ ]:
mb.swaplevel('Patient', 'Taxon').head()

Data can also be sorted by any index level, using sortlevel:


In [ ]:
mb.sortlevel('Patient', ascending=False).head()
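
Note that sortlevel is deprecated in newer versions of pandas; the equivalent call uses sort_index with a level argument:

mb.sort_index(level='Patient', ascending=False).head()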

Missing data

The occurrence of missing data is so prevalent that it pays to use tools like Pandas, which seamlessly integrate missing data handling so that it can be dealt with easily, and in the manner required by the analysis at hand.

Missing data are represented in Series and DataFrame objects by the NaN floating point value. However, None is also treated as missing, since it is commonly used as such in other contexts (e.g. NumPy).


In [ ]:
foo = pd.Series([np.nan, -3, None, 'foobar'])
foo

In [ ]:
foo.isnull()

Missing values may be dropped or indexed out:


In [ ]:
bacteria2

In [ ]:
bacteria2.dropna()

In [ ]:
bacteria2.isnull()

In [ ]:
bacteria2[bacteria2.notnull()]

By default, dropna drops entire rows in which one or more values are missing.


In [ ]:
data.dropna()

This can be overridden by passing the how='all' argument, which only drops a row when every field is a missing value.


In [ ]:
data.dropna(how='all')

This can be customized further by specifying how many values need to be present before a row is dropped via the thresh argument.


In [ ]:
data.loc[7, 'year'] = np.nan
data

In [ ]:
data.dropna(thresh=5)

This is typically used in time series applications, where there are repeated measurements that are incomplete for some subjects.

Exercise 5

Try using the axis argument to drop columns with missing values:


In [ ]:
data.dropna(axis=1)

Rather than omitting missing data from an analysis, in some cases it may be suitable to fill the missing value in, either with a default value (such as zero) or a value that is either imputed or carried forward/backward from similar data points. We can do this programmatically in Pandas with the fillna method.


In [ ]:
bacteria2.fillna(0)

In [ ]:
data.fillna({'year': 2013, 'treatment':2})

Notice that fillna by default returns a new object with the desired filling behavior, rather than changing the Series or DataFrame in place (in general, we like to do this, by the way!).

We can alter values in-place using inplace=True.


In [ ]:
data.year.fillna(2013, inplace=True)
data

Missing values can also be interpolated, using any one of a variety of methods:


In [ ]:
bacteria2.fillna(method='bfill')
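
For example, the interpolate method performs linear interpolation by default; a minimal sketch:

pd.Series([1.0, np.nan, np.nan, 4.0]).interpolate()   # fills in 2.0 and 3.0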

Data summarization

We often wish to summarize data in Series or DataFrame objects, so that they can more easily be understood or compared with similar data. The NumPy package contains several functions that are useful here, but several summarization or reduction methods are built into Pandas data structures.


In [ ]:
baseball.sum()

Clearly, sum is more meaningful for some columns than others. For methods like mean, for which application to string variables is not just meaningless but impossible, these columns are automatically excluded:


In [ ]:
baseball.mean()

The important difference between NumPy's functions and Pandas' methods is that the latter have built-in support for handling missing data.


In [ ]:
bacteria2

In [ ]:
bacteria2.mean()

Sometimes we may not want to ignore missing values, and allow the NaN to propagate.


In [ ]:
bacteria2.mean(skipna=False)

Passing axis=1 will summarize over rows instead of columns, which only makes sense in certain situations.


In [ ]:
extra_bases = baseball[['X2b','X3b','hr']].sum(axis=1)
extra_bases.sort_values(ascending=False)

A useful summarization that gives a quick snapshot of multiple statistics for a Series or DataFrame is describe:


In [ ]:
baseball.describe()

describe can detect non-numeric data and sometimes yield useful information about it.


In [ ]:
baseball.player.describe()

We can also calculate summary statistics across multiple columns, for example, correlation and covariance.

$$cov(x,y) = \sum_i (x_i - \bar{x})(y_i - \bar{y})$$

In [ ]:
baseball.hr.cov(baseball.X2b)

$$corr(x,y) = \frac{cov(x,y)}{(n-1)s_x s_y} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \sum_i (y_i - \bar{y})^2}}$$

In [ ]:
baseball.hr.corr(baseball.X2b)

In [ ]:
baseball.ab.corr(baseball.h)

Try running corr on the entire baseball DataFrame to see what is returned:



In [ ]:
baseball.corr()

It returns the pairwise correlation matrix for all numeric columns.


If we have a DataFrame with a hierarchical index (or indices), summary statistics can be applied with respect to any of the index levels:


In [ ]:
mb.head()

In [ ]:
mb.sum(level='Taxon')

Writing Data to Files

As well as being able to read several data input formats, Pandas can also export data to a variety of storage formats. We will bring your attention to just a couple of these.


In [ ]:
mb.to_csv("mb.csv")

The to_csv method writes a DataFrame to a comma-separated values (csv) file. You can specify custom delimiters (via the sep argument), how missing values are written (na_rep), whether the index is written (index), and whether the header is included (header), among other options.
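
For instance, a sketch combining a few of these options (the output file name is hypothetical):

mb.to_csv("mb_tabs.txt", sep='\t', na_rep='NULL', index=True, header=True)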

An efficient way of storing data to disk is in binary format. Pandas supports this using Python’s built-in pickle serialization.


In [ ]:
baseball.to_pickle("baseball_pickle")

The complement to to_pickle is the read_pickle function, which restores the pickle to a DataFrame or Series:


In [ ]:
pd.read_pickle("baseball_pickle")

As Wes warns in his book, it is recommended that binary storage of data via pickle only be used as a temporary storage format, in situations where speed is relevant. This is because there is no guarantee that the pickle format will not change with future versions of Python.

Advanced Exercise: Compiling Ebola Data

The Data/ebola folder contains summarized reports of Ebola cases from three countries during the recent outbreak of the disease in West Africa. For each country, there are daily reports that contain various information about the outbreak in several cities in each country.

From these data files, use pandas to import them and create a single data frame that includes the daily totals of new cases and deaths for each country.
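
As a starting point, here is a minimal sketch of the general approach, assuming (hypothetically) one CSV per daily report nested under per-country folders; the actual layout and column names in Data/ebola will dictate the filtering and aggregation steps:

import glob

frames = [pd.read_csv(path) for path in glob.glob("Data/ebola/*/*.csv")]
ebola = pd.concat(frames, ignore_index=True)   # stack all reports into one DataFrame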

Our solution is in a separate notebook.